
    A statistical model for in vivo neuronal dynamics

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process in which the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. (Comment: 31 pages, 10 figures)
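
    The model class is easiest to grasp from a small simulation. The sketch below is an illustrative caricature rather than the paper's exact specification: the Ornstein-Uhlenbeck choice of subthreshold process, the exponential intensity, the simple refractory term, and all parameter values are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-3, 5.0                       # time step [s], total duration [s]
    n = int(T / dt)

    # Subthreshold membrane potential: an Ornstein-Uhlenbeck process, i.e. a
    # Gaussian process with exponential autocovariance (the model allows more
    # general covariances; this is just the simplest choice).
    tau_m, mu, sigma = 0.02, -60.0, 3.0     # time constant [s], mean and s.d. [mV]
    V = np.empty(n)
    V[0] = mu
    for t in range(1, n):
        V[t] = (V[t-1] + dt * (mu - V[t-1]) / tau_m
                + sigma * np.sqrt(2 * dt / tau_m) * rng.standard_normal())

    # Spike emission: a point process whose intensity depends nonlinearly on the
    # membrane potential and on the spiking history (here: refractory suppression).
    beta, V_T, lam0, tau_ref = 0.5, -50.0, 10.0, 0.005
    spike_times, last_spike = [], -np.inf
    for t in range(n):
        lam = lam0 * np.exp(beta * (V[t] - V_T))        # nonlinear in V
        if t * dt - last_spike < tau_ref:               # history dependence
            lam *= 0.01
        if rng.random() < 1.0 - np.exp(-lam * dt):
            spike_times.append(t * dt)
            last_spike = t * dt

    print(f"{len(spike_times)} spikes in {T:.1f} s")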

    The Hitchhiker's Guide to Nonlinear Filtering

    Nonlinear filtering is the problem of online estimation of a dynamic hidden variable from incoming data, with vast applications in fields ranging from engineering and machine learning to economics and the natural sciences. We start our review of the theory of nonlinear filtering from the simplest 'filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, and generalize to, and further emphasize, continuous-time filtering theory. The idea of changing the probability measure connects and elucidates several aspects of the theory, such as the parallels between the discrete- and continuous-time problems and between different observation models. Furthermore, it gives insight into the construction of particle filtering algorithms. This tutorial is targeted at scientists and engineers and should serve as an introduction to the main ideas of nonlinear filtering, and as a segue to more advanced and specialized literature. (Comment: 64 pages)
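
    For the discrete-time part of the story, a minimal bootstrap particle filter may help fix ideas. The state-space model, noise levels, and multinomial resampling below are placeholder choices for the illustration, not taken from the tutorial.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 500, 100                          # number of particles, time steps
    sig_x, sig_y = 0.5, 1.0                  # process and observation noise (placeholders)

    def f(x):                                # hidden-state transition (placeholder)
        return 0.9 * x + np.sin(x)

    # Simulate a ground-truth trajectory and noisy observations of it.
    x_true = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x_true[t] = f(x_true[t-1]) + sig_x * rng.standard_normal()
        y[t] = x_true[t] + sig_y * rng.standard_normal()

    # Bootstrap particle filter: propagate with the prior dynamics, reweight by
    # the observation likelihood, then resample to counteract weight degeneracy.
    particles = rng.standard_normal(N)
    est = np.zeros(T)
    for t in range(1, T):
        particles = f(particles) + sig_x * rng.standard_normal(N)
        logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.dot(w, particles)
        particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

    print("filter RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))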

    Online Maximum Likelihood Estimation of the Parameters of Partially Observed Diffusion Processes

    We revisit the problem of estimating the parameters of a partially observed diffusion process, consisting of a hidden state process and an observed process, with a continuous time parameter. The estimation is to be done online, i.e., the parameter estimate should be updated recursively based on the observation filtration. Here, we use an old but under-exploited representation of the incomplete-data log-likelihood function in terms of the filter of the hidden state given the observations. By performing a stochastic gradient ascent, we obtain a fully recursive algorithm for the time evolution of the parameter estimate. We prove the convergence of the algorithm under suitable conditions regarding the ergodicity of the process consisting of state, filter, and tangent filter. Additionally, our parameter estimation is shown numerically to have the potential of improving suboptimal filters, and it can be applied even when the system is not identifiable due to parameter redundancies. Online parameter estimation is a challenging problem that is ubiquitous in fields such as robotics, neuroscience, and finance, where it is needed to design adaptive filters and optimal controllers for unknown or changing systems.
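
    The recursion is easiest to see in a discrete-time, linear-Gaussian caricature (the paper itself works with continuous-time diffusions): the gradient of the incremental log-likelihood is obtained by running the filter together with its derivative with respect to the parameter, the tangent filter, and the parameter is updated by stochastic gradient ascent. The model, initial values, and step-size schedule below are assumptions for the illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    T = 5000
    a_true, Q, R = 0.8, 0.2, 0.5               # true dynamics parameter, noise variances

    # Simulate the partially observed process x_t = a x_{t-1} + noise, y_t = x_t + noise.
    x, ys = 0.0, np.zeros(T)
    for t in range(T):
        x = a_true * x + np.sqrt(Q) * rng.standard_normal()
        ys[t] = x + np.sqrt(R) * rng.standard_normal()

    # Online maximum likelihood: Kalman filter (m, P), tangent filter (dm, dP),
    # and a stochastic gradient ascent step on the incremental log-likelihood.
    a, m, P, dm, dP = 0.3, 0.0, 1.0, 0.0, 0.0
    for t in range(T):
        m_pred, P_pred = a * m, a * a * P + Q                   # prediction
        dm_pred, dP_pred = m + a * dm, 2 * a * P + a * a * dP   # its derivative w.r.t. a
        v, S = ys[t] - m_pred, P_pred + R                       # innovation and its variance
        dv, dS = -dm_pred, dP_pred
        K, dK = P_pred / S, (dP_pred * S - P_pred * dS) / S ** 2
        m, P = m_pred + K * v, (1 - K) * P_pred                 # update
        dm = dm_pred + dK * v + K * dv
        dP = -dK * P_pred + (1 - K) * dP_pred
        dll = -0.5 * dS / S - v * dv / S + 0.5 * v ** 2 * dS / S ** 2
        a = np.clip(a + dll / (50.0 + t), -0.99, 0.99)          # recursive parameter update

    print("estimated a:", a, " true a:", a_true)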

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e., the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet, these algorithms prevalently rely on importance weights, and it thus remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weightless particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model not only captures the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weightless approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
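
    Schematically (notation paraphrased rather than quoted from the paper), each particle follows the hidden dynamics and is additionally pulled by its own prediction error of the observations through a gain $W_t$, which can be set by hand or learned online by maximum likelihood:

    dx_t^{(i)} = f\bigl(x_t^{(i)}\bigr)\,dt + \Sigma_x^{1/2}\, d\omega_t^{(i)} + W_t \bigl( dy_t - g\bigl(x_t^{(i)}\bigr)\,dt \bigr), \qquad i = 1, \dots, N,

    where $f$ and $\Sigma_x$ describe the hidden dynamics, $g$ the observation model, and the posterior is represented by the equally weighted empirical distribution of the $N$ particles.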

    How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights

    Particle filters are a popular and flexible class of numerical algorithms for solving a large class of nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional filtering problems. Here, we focus on the dynamic aspect of this curse of dimensionality (COD) in continuous-time filtering, which is caused by the degeneracy of importance weights over time. We show that the degeneracy occurs on a time scale that decreases with increasing D. In order to soften the effects of weight degeneracy, most particle filters use particle resampling and improved proposal functions for the particle motion. We explain why neither of the two can prevent the COD in general. In order to address this fundamental problem, we investigate an existing filtering algorithm based on optimal feedback control that sidesteps the use of importance weights. We use numerical experiments to show that this Feedback Particle Filter (FPF) by Yang et al. (2013) does not exhibit a COD.
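
    The weight-degeneracy mechanism behind this result is easy to reproduce in a static toy experiment (an illustration of the symptom only, not of the paper's continuous-time analysis): with importance weights proportional to the likelihood of a single observation, the effective sample size collapses as the dimension D grows.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 1000                                     # number of particles

    for D in (1, 2, 5, 10, 20, 50):
        x = rng.standard_normal((N, D))          # particles drawn from the prior N(0, I_D)
        y = rng.standard_normal(D)               # one noisy observation (unit noise)
        logw = -0.5 * np.sum((y - x) ** 2, axis=1)   # log importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)               # effective sample size
        print(f"D = {D:3d}   ESS ~ {ess:8.1f} out of {N}")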

    Learning as filtering: Implications for spike-based plasticity.

    Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with optimal learning rate. The dynamics of the mean of the Synaptic Filter are consistent with spike-timing dependent plasticity (STDP), while the dynamics of the variance make novel predictions regarding spike-timing dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
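
    The learning-as-filtering idea can be caricatured in a linear-Gaussian setting (the actual Synaptic Filter is derived for spiking neurons and is considerably richer; the drift model, noise levels, and learning rate below are assumptions for the example): the filtering rule tracks a mean and a variance, and the variance acts as an uncertainty-dependent learning rate, which a fixed-rate gradient rule lacks.

    import numpy as np

    rng = np.random.default_rng(4)
    T, dt = 2000, 1.0
    tau_w, sig_w, sig_y = 500.0, 0.5, 1.0      # weight drift time constant and noise scales

    w_true, w_hat, P = 0.0, 0.0, 1.0           # hidden weight, filter mean, filter variance
    w_grad, eta = 0.0, 0.05                    # gradient-rule baseline with a fixed rate
    err_filt, err_grad = [], []

    for t in range(T):
        # Slowly drifting "true" synaptic weight (an Ornstein-Uhlenbeck process).
        w_true += dt * (-w_true / tau_w) + sig_w * np.sqrt(2 * dt / tau_w) * rng.standard_normal()
        x = rng.standard_normal()              # presynaptic input
        y = w_true * x + sig_y * rng.standard_normal()   # postsynaptic observation

        # Filtering view of learning: propagate the uncertainty, then update the
        # mean with a gain that is scaled by the current variance.
        P += dt * (-2 * P / tau_w + 2 * sig_w ** 2 / tau_w)
        w_hat += dt * (-w_hat / tau_w)
        S = P * x ** 2 + sig_y ** 2
        K = P * x / S
        w_hat += K * (y - w_hat * x)
        P -= K * x * P

        # Plain gradient rule with a fixed learning rate, for comparison.
        w_grad += eta * x * (y - w_grad * x)

        err_filt.append((w_hat - w_true) ** 2)
        err_grad.append((w_grad - w_true) ** 2)

    print("MSE filtering rule:", np.mean(err_filt), " MSE gradient rule:", np.mean(err_grad))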

    On the choice of metric in gradient-based theories of brain function

    The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Since a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often take the vector of partial derivatives to be the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Since the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric. (Comment: revised version; 14 pages, 4 figures)
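
    A small numerical illustration of the central point (the quadratic cost and the two metrics are arbitrary examples): the cost and its partial derivatives are identical in both cases, yet the fixed-step "gradient descent" trajectories differ, because the steepest-descent direction under a metric M is -M^{-1} times the vector of partial derivatives.

    import numpy as np

    # Cost f(x) = 0.5 * x^T A x with an ill-conditioned curvature matrix A.
    A = np.array([[10.0, 0.0], [0.0, 0.1]])
    grad = lambda x: A @ x                     # vector of partial derivatives

    # Steepest descent under a metric M follows -M^{-1} grad f, so the predicted
    # dynamics depend on the (usually unstated) choice of M.
    metrics = {
        "Euclidean metric (M = I)": np.eye(2),
        "Curvature metric (M = A)": A,         # this choice yields Newton-like steps
    }
    for name, M in metrics.items():
        x = np.array([1.0, 1.0])
        for _ in range(20):
            x = x - 0.05 * np.linalg.solve(M, grad(x))
        print(f"{name}: x after 20 steps = {x}")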

    Gauge Freedom within the Class of Linear Feedback Particle Filters

    Abedi E, Surace SC. Gauge Freedom within the Class of Linear Feedback Particle Filters. In: 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE; 2019: 666-671.
    Feedback particle filters (FPFs) are Monte Carlo approximations of the solution of the filtering problem in continuous time. The samples, or particles, evolve according to a feedback control law in order to track the posterior distribution. However, it is known that, by itself, the requirement to track the posterior does not lead to a unique algorithm. Given a particle filter, another one can be constructed by applying a time-dependent transformation of the particles that keeps the posterior distribution invariant. Here, we characterize this gauge freedom within the class of FPFs for the linear-Gaussian filtering problem, and thereby extend previously known parametrized families of linear FPFs.
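
    For orientation, and under the simplifying assumption of unit observation-noise covariance (a paraphrased sketch, not the paper's exact statement), a linear FPF for the model $dX_t = A X_t\,dt + dB_t$, $dY_t = C X_t\,dt + dW_t$ moves its particles as

    dX_t^{(i)} = A X_t^{(i)}\,dt + dB_t^{(i)} + \mathsf{K}_t \Bigl( dY_t - \tfrac{1}{2}\, C \bigl( X_t^{(i)} + \mu_t \bigr)\,dt \Bigr), \qquad \mathsf{K}_t = \Sigma_t C^{\top},

    where $\mu_t$ and $\Sigma_t$ are the particle mean and covariance. The gauge freedom characterized in the paper consists of modifications of these particle dynamics that leave the evolution of the posterior (here, of $\mu_t$ and $\Sigma_t$) unchanged.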

    How to Avoid the Curse of Dimensionality: Scalability of Particle Filters with and without Importance Weights

    Particle filters are a popular and flexible class of numerical algorithms to solve a large class of nonlinear filtering problems. However, standard particle filters with importance weights have been shown to require a sample size that increases exponentially with the dimension D of the state space in order to achieve a certain performance, which precludes their use in very high-dimensional filtering problems. Here, we focus on the dynamic aspect of this “curse of dimensionality” (COD) in continuous-time filtering, which is caused by the degeneracy of importance weights over time. We show that the degeneracy occurs on a time scale that decreases with increasing D. In order to soften the effects of weight degeneracy, most particle filters use particle resampling and improved proposal functions for the particle motion. We explain why neither of the two can prevent the COD in general. In order to address this fundamental problem, we investigate an existing filtering algorithm based on optimal feedback control that sidesteps the use of importance weights. We use numerical experiments to show that this feedback particle filter (FPF) by [T. Yang, P. G. Mehta, and S. P. Meyn, IEEE Trans. Automat. Control, 58 (2013), pp. 2465-2480] does not exhibit a COD.
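
    A scalar Euler-Maruyama sketch of an FPF in the linear-Gaussian case may make the weight-free mechanism concrete (the model, step size, and the gain approximation via the empirical particle variance are illustrative assumptions): the particles are unweighted and are steered by the innovation instead of being reweighted by it.

    import numpy as np

    rng = np.random.default_rng(5)
    dt, T, N = 1e-2, 10.0, 200
    a, c = -0.5, 1.0                       # dx = a x dt + db,   dy = c x dt + dw
    n = int(T / dt)

    # Simulate the hidden state and the observation increments.
    x, xs, y_inc = 0.0, np.zeros(n), np.zeros(n)
    for t in range(n):
        x += a * x * dt + np.sqrt(dt) * rng.standard_normal()
        y_inc[t] = c * x * dt + np.sqrt(dt) * rng.standard_normal()
        xs[t] = x

    # Feedback particle filter: unweighted particles steered by the innovation;
    # the gain uses the empirical particle variance (unit observation noise).
    p = rng.standard_normal(N)
    est = np.zeros(n)
    for t in range(n):
        K = np.var(p) * c
        innov = y_inc[t] - 0.5 * c * (p + p.mean()) * dt
        p += a * p * dt + np.sqrt(dt) * rng.standard_normal(N) + K * innov
        est[t] = p.mean()

    print("RMSE of the FPF mean vs. the true state:", np.sqrt(np.mean((est - xs) ** 2)))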

    A Unification of Weighted and Unweighted Particle Filters

    Particle filters (PFs), which are successful methods for approximating the solution of the filtering problem, can be divided into two types: weighted and unweighted PFs. It is well known that weighted PFs suffer from weight degeneracy and the curse of dimensionality. To sidestep these issues, unweighted PFs have been gaining attention, though they have their own challenges. The existing literature on these two types of PFs is based on distinct approaches. In order to establish a connection, we put forward a framework that unifies weighted and unweighted PFs in the continuous-time filtering problem. We show that the stochastic dynamics of a particle system described by a pair process, representing the particles and their importance weights, must satisfy two necessary conditions in order for its distribution to match the solution of the Kushner-Stratonovich equation. In particular, we demonstrate that the bootstrap particle filter (BPF), which relies on importance sampling, and the feedback particle filter (FPF), which is an unweighted PF based on optimal control, arise as special cases of a broad class, and that there is a smooth transition between the two. The freedom in designing the PF dynamics opens up potential ways to address the existing issues in the aforementioned algorithms, namely weight degeneracy in the BPF and gain estimation in the FPF.
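
    As a reference point (standard notation, written here for unit observation-noise covariance), the Kushner-Stratonovich equation that the law of any such particle system has to reproduce reads, for test functions $\phi$,

    d\pi_t(\phi) = \pi_t(\mathcal{L}\phi)\,dt + \bigl( \pi_t(\phi h) - \pi_t(\phi)\,\pi_t(h) \bigr)^{\top} \bigl( dY_t - \pi_t(h)\,dt \bigr),

    where $\pi_t$ is the posterior, $\mathcal{L}$ the generator of the hidden process, and $h$ the observation function. The BPF matches this evolution by reweighting its particles, whereas the FPF matches it by transporting them.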